
    Mixed physical and virtual design environments for digital fabrication

    Digital fabrication (3D printing, laser-cutting, or CNC milling) enables the automated fabrication of physical objects from digital models. The technology is becoming increasingly available and ubiquitous as digital fabrication machines become more capable and affordable. When it comes to designing the objects to be fabricated, however, there are still barriers for novices and inconveniences for experts. The digital models are currently designed in virtual design environments, which separates the world we design in from the world we design for. This separation hampers the design processes of experienced users and presents barriers to novices. For example, manipulating objects in virtual space is difficult, but comes naturally in the physical world. Further, in a virtual environment we cannot easily integrate existing physical objects or experience the object we are designing in its future context (e.g., trying out a game controller during design). This lack of reflection impedes designers' spatial understanding in virtual design environments. To turn our virtual creations into physical reality, we must possess ample design and engineering knowledge, which further steepens the learning curve for novices. Lastly, as we are physically separated from our creation until it is fabricated, we lose direct engagement with the material and the object itself, impacting creativity.

    We follow a research-through-design approach, taking on the roles of interaction designers and engineers. Based on four novel interaction concepts, we explore how the physical world and design environments can be brought closer together, addressing the problems caused by their prior separation. As engineers, we implement each of these concepts in a prototype system, demonstrating its feasibility. Using the systems, we evaluate the concepts, how they alleviate the aforementioned problems, and whether the design systems we create are capable of producing useful objects.

    In this thesis, we make four main contributions to the body of digital-fabrication-related HCI knowledge. Each contribution consists of an interaction concept that addresses a subset of the problems caused by the separation of the virtual design environment from the physical target world. We evaluate the concepts through prototype implementations, example walkthroughs, and, where appropriate, user studies, demonstrating how the concepts alleviate the problems they address. For each concept and system, we describe the design rationale and present technical contributions towards its implementation.

    The results of this thesis have implications for different user audiences, design processes, the artifacts users design, and domains outside of digital fabrication. Through our concepts and systems, we lower barriers for novices to utilize digital fabrication. For experienced designers, we make existing design processes more convenient and efficient. We ease the design of artifacts that reuse existing objects or that combine organic and geometrically structured design. Lastly, the novel interaction concepts (and, on a technical level, the systems) we present, which blur the lines between physical and virtual space, can serve as a basis for future interaction design and HCI research.

    Shape Display Shader Language (SDSL): a new programming model for shape changing displays

    Shape-changing displays' dynamic physical affordances have inspired a range of novel hardware designs that support new types of interaction. Despite rapid technological progress, the community lacks a common programming model for developing applications for these visually and physically dynamic display surfaces. The result is complex, hardware-specific custom code that requires significant development effort and prevents researchers from easily building on and sharing their applications across hardware platforms. As a first attempt to address these issues, we introduce SDSL, a Shape-Display Shader Language for easily programming shape-changing displays in a hardware-independent manner. We introduce the (graphics-derived) pipeline model of SDSL and an open-source implementation that includes a compiler, runtime, IDE, debugger, and simulator, and we show demonstrator applications running on two shape-changing hardware setups.
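
    The abstract does not show SDSL's actual syntax, so as a loose illustration of the graphics-derived pipeline idea, here is a hypothetical Python sketch in which a per-pin "shader" stage computes heights each frame and a swappable driver stage outputs them, independent of the underlying hardware; all names are invented.

```python
# Hypothetical sketch of a graphics-style shader pipeline for a pin display.
# SDSL's real syntax is not given in the abstract; all names here are invented.
import math
import time

def height_shader(x, y, t):
    """Per-pin stage: compute a normalized height (0..1) for pin (x, y) at time t."""
    return 0.5 + 0.5 * math.sin(0.5 * (x + y) + 2.0 * t)  # travelling diagonal wave

def render_frame(width, height, t):
    """Rasterize one frame: run the shader for every pin, like a fragment stage."""
    return [[height_shader(x, y, t) for x in range(width)] for y in range(height)]

class PrintDriver:
    """Stand-in 'simulator' backend; a real backend would drive display hardware."""
    def set_heights(self, frame):
        print([[round(h, 2) for h in row] for row in frame])

def present(frame, driver):
    """Hardware-independent output stage: a driver maps heights to actuators."""
    driver.set_heights(frame)

present(render_frame(4, 4, time.time()), PrintDriver())
```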

    ReForm: integrating physical and digital design through bidirectional fabrication

    Digital fabrication machines such as 3D printers and laser-cutters allow users to produce physical objects based on virtual models. The creation process is currently unidirectional: once an object is fabricated, it is separated from its originating virtual model. Consequently, users are tied into digital modeling tools, the virtual design must be completed before fabrication, and once fabricated, re-shaping the physical object no longer influences the digital model. To provide a more flexible design process that allows objects to evolve iteratively through both digital and physical input, we introduce bidirectional fabrication. To demonstrate the concept, we built ReForm, a system that integrates digital modeling with shape input, shape output, annotation for machine commands, and visual output. By continually synchronizing the physical object and the digital model, it supports object versioning, allowing physical changes to be undone. Through application examples, we demonstrate the benefits of ReForm to the digital fabrication process.
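
    A minimal sketch of what bidirectional synchronization with versioning could look like, assuming hypothetical scan and re-shaping primitives; ReForm's actual architecture is not detailed in the abstract.

```python
# Minimal sketch of bidirectional synchronization with versioning. The scan and
# re-shaping primitives are hypothetical stand-ins, not ReForm's real interfaces.
class BidirectionalModel:
    def __init__(self, initial_mesh):
        self.versions = [initial_mesh]  # full history enables physical "undo"

    @property
    def current(self):
        return self.versions[-1]

    def edit_digitally(self, transform):
        """Virtual edit: record a new version, then push it to the physical side."""
        self.versions.append(transform(self.current))
        self.push_to_physical(self.current)

    def edit_physically(self, scanned_mesh):
        """Physical edit: a 3D scan of the reshaped object becomes the new version."""
        self.versions.append(scanned_mesh)

    def undo(self):
        """Roll back one version and re-form the physical object to match."""
        if len(self.versions) > 1:
            self.versions.pop()
            self.push_to_physical(self.current)

    def push_to_physical(self, mesh):
        # stand-in for shape output: would command the machine to re-form the object
        print(f"re-forming object to version {len(self.versions) - 1}")
```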

    SPATA: Spatio-tangible tools for fabrication-aware design

    The physical tools used when designing new objects for digital fabrication are mature, yet disconnected from their virtual counterparts. SPATA is a digital adaptation of two spatial measurement tools that explores their closer integration into virtual design environments. We adapt two traditional measurement tools, calipers and protractors; both can measure, transfer, and present size and angle. Their close integration into different design environments makes design tasks more fluid and convenient. We describe the tools' design, a prototype implementation, their integration into different environments, and application scenarios that validate the concept.
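
    As a sketch of the measure/transfer/present round trip described above, the following hypothetical Python API shows a caliper measurement flowing into a CAD parameter and a CAD dimension being driven back onto the tool. The protocol and CAD bindings are assumptions, not SPATA's real interfaces.

```python
# Hypothetical sketch of SPATA's measure/transfer/present round trip; the real
# tool protocol and CAD bindings are not specified in the abstract.
class DigitalCaliper:
    def read_mm(self):
        return 42.3  # stub: would query the physical tool's sensor

    def present_mm(self, value):
        # actuated jaws move to 'value' so a virtual dimension can be compared physically
        print(f"caliper jaws driven to {value} mm")

def transfer_measurement(caliper, cad_set_param):
    """Transfer: a physical measurement directly sets a CAD model parameter."""
    cad_set_param("slot_width", caliper.read_mm())

def present_dimension(caliper, cad_get_param):
    """Present: a CAD dimension is output on the tool for physical verification."""
    caliper.present_mm(cad_get_param("slot_width"))

params = {}  # stand-in for a parametric CAD model
cal = DigitalCaliper()
transfer_measurement(cal, params.__setitem__)
present_dimension(cal, params.__getitem__)
```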

    ShapeClip: towards rapid prototyping with shape-changing displays for designers

    This paper presents ShapeClip: a modular tool capable of transforming any computer screen into a z-actuating shape-changing display. This enables designers to produce dynamic physical forms by "clipping" actuators onto screens. ShapeClip displays are portable, scalable, fault-tolerant, and support runtime re-arrangement. Users are not required to have knowledge of electronics or programming, and can develop motion designs with presentation software, image editors, or web technologies. To evaluate ShapeClip, we carried out a full-day workshop with expert designers. Participants were asked to generate shape-changing designs and then construct them using ShapeClip. ShapeClip enabled participants to rapidly and successfully transform their ideas into functional systems.
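
    Since motion designs are authored as on-screen pixels, each clip plausibly derives its target height from the display brightness beneath it. The sketch below illustrates that brightness-to-height mapping; ShapeClip's actual sensing and calibration are simplified assumptions here.

```python
# Sketch of screen-driven actuation: each clip samples the display brightness
# beneath it and drives its z-height accordingly. Sensing and calibration
# details of the real ShapeClip hardware are simplified assumptions.
def luminance(rgb):
    r, g, b = rgb
    return (0.2126 * r + 0.7152 * g + 0.0722 * b) / 255.0  # normalized 0..1

def target_heights(pixels_under_clips, max_travel_mm=25.0):
    """Map the pixel under each clip to an actuator height; brighter = taller."""
    return [luminance(px) * max_travel_mm for px in pixels_under_clips]

# A white pixel fully extends a clip and a black pixel retracts it, so any tool
# that can animate pixels (slides, GIFs, web pages) can animate the surface.
print(target_heights([(255, 255, 255), (128, 128, 128), (0, 0, 0)]))
```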

    AmbiGaze: direct control of ambient devices by gaze

    Eye tracking offers many opportunities for direct device control in smart environments, but issues such as the need for calibration and the Midas touch problem make it impractical. In this paper, we propose AmbiGaze, a smart environment that employs animated targets to give users direct control of devices by gaze alone, through smooth pursuit tracking. We propose a design space of ways to expose functionality through movement and illustrate the concept through four prototypes. We evaluated the system in a user study and found that AmbiGaze enables robust gaze-only interaction with many devices, from multiple positions in the environment, in a spontaneous and comfortable manner.
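
    Smooth pursuit selection is typically implemented by correlating recent gaze motion with each animated target's motion and selecting the best match, which sidesteps calibration because no absolute gaze position is needed. A rough Python sketch follows, with an illustrative threshold rather than AmbiGaze's exact parameters.

```python
# Sketch of smooth-pursuit selection: the device whose animated target's motion
# correlates best with recent gaze motion is selected. Threshold is illustrative.
import numpy as np

def pursuit_match(gaze_xy, targets_xy, threshold=0.8):
    """gaze_xy: (N,2) gaze samples; targets_xy: dict name -> (N,2) target positions."""
    best, best_r = None, threshold
    for name, traj in targets_xy.items():
        # correlate x and y components separately; both axes must follow the target
        rx = np.corrcoef(gaze_xy[:, 0], traj[:, 0])[0, 1]
        ry = np.corrcoef(gaze_xy[:, 1], traj[:, 1])[0, 1]
        r = min(rx, ry)
        if r > best_r:
            best, best_r = name, r
    return best  # None if no device's target is being followed

t = np.linspace(0, 2 * np.pi, 60)
lamp = np.stack([np.cos(t), np.sin(t)], axis=1)      # circular target on the lamp
fan = np.stack([np.cos(-t), np.sin(-t)], axis=1)     # counter-rotating target
gaze = lamp + np.random.normal(0, 0.05, lamp.shape)  # noisy pursuit of the lamp
print(pursuit_match(gaze, {"lamp": lamp, "fan": fan}))  # -> "lamp"
```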

    3D-Guided Face Manipulation of 2D Images for the Prediction of Post-Operative Outcome after Cranio-Maxillofacial Surgery

    Cranio-maxillofacial surgery often alters the aesthetics of the face, which can be a heavy burden for patients deciding whether or not to undergo surgery. Today, physicians can predict the post-operative face using surgery planning tools to support the patient's decision-making. While these planning tools allow a simulation of the post-operative face, the facial texture must usually be captured by a separate 3D texture scan and subsequently mapped onto the simulated face. This approach often results in face predictions that do not appear realistic or lively and are therefore ill-suited to guide the patient's decision-making. Instead, we propose a method that uses a generative adversarial network to modify a facial image according to a 3D soft-tissue estimate of the post-operative face. To circumvent the lack of available data pairs of pre- and post-operative measurements, we propose a semi-supervised training strategy using cycle losses that requires only paired open-source data of images and 3D surfaces of the face's shape. After training on "in-the-wild" images, we show that our model can realistically manipulate local regions of a face in a 2D image based on a modified 3D shape. We then test our model on four clinical examples, predicting the post-operative face according to a 3D soft-tissue prediction of the surgery outcome simulated by a surgery planning tool. In doing so, we aim to demonstrate the potential of our approach to predict realistic post-operative facial images without the need for paired clinical data, physical models, or 3D texture scans.
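
    An illustrative cycle-consistency term, assuming a generator G that manipulates an image toward a target 3D shape and a second mapping F_net back to the original shape; this is a generic sketch of the cycle-loss idea, not the paper's exact networks or loss weighting.

```python
# Generic cycle-consistency loss sketch: manipulating an image toward a target
# 3D shape and mapping it back should recover the original image, giving a
# training signal without paired pre-/post-operative data. G and F_net are
# hypothetical generators; the paper's architecture and weighting may differ.
import torch
import torch.nn.functional as F

def cycle_loss(G, F_net, image, shape_orig, shape_target, lam=10.0):
    fake = G(image, shape_target)        # manipulate face toward the target 3D shape
    recon = F_net(fake, shape_orig)      # map back under the original 3D shape
    return lam * F.l1_loss(recon, image) # penalize deviation from the input image

# toy usage with identity "generators" just to show the call structure
G = lambda img, shape: img
F_net = lambda img, shape: img
img = torch.rand(1, 3, 64, 64)
print(cycle_loss(G, F_net, img, None, None).item())  # 0.0 for identity maps
```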

    Exploring interactions with physically dynamic bar charts

    Visualizations such as bar charts help users reason about data, but are mostly screen-based, rarely physical, and almost never physical and dynamic. This paper investigates the role of physically dynamic bar charts and evaluates new interactions for exploring and working with datasets rendered in dynamic physical form. To facilitate our exploration, we constructed a 10x10 interactive bar chart and designed interactions that support fundamental visualization tasks, specifically annotation, filtering, organization, and navigation. The interactions were evaluated in a user study with 17 participants. Our findings identify the preferred methods of working with the data for each task (e.g., directly tapping rows to hide bars), highlight the strengths and limitations of working with physical data, and discuss the challenges of integrating the proposed interactions into a larger data exploration system. In general, physical interactions were intuitive, informative, and enjoyable, paving the way for new explorations in physical data visualization.

    Generative-Adversarial-Network-Based Data Augmentation for the Classification of Craniosynostosis

    Craniosynostosis is a congenital disease characterized by the premature closure of one or multiple sutures of the infant's skull. For diagnosis, 3D photogrammetric scans are a radiation-free alternative to computed tomography. However, data is only sparsely available, and the role of data augmentation in the classification of craniosynostosis has not yet been analyzed. In this work, we use a 2D distance-map representation of the infants' heads with a convolutional-neural-network-based classifier and employ a generative adversarial network (GAN) for data augmentation. We simulate two data-scarcity scenarios with 15% and 10% training data and test the influence of different amounts of added synthetic data and of balancing underrepresented classes. We use total accuracy and F1-score as metrics to evaluate the final classifiers. With 15% training data, the GAN-augmented dataset showed an increase in F1-score of up to 0.1 and in classification accuracy of up to 3%. With 10% training data, both metrics decreased. We present a deep convolutional GAN capable of creating synthetic data for the classification of craniosynostosis. Adding a moderate amount of GAN-generated synthetic data showed slightly better performance, but had little effect overall. The simulated scarcity scenario of 10% training data may have limited the model's ability to learn the underlying data distribution.
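
    A sketch of the augmentation and class-balancing step: underrepresented classes are topped up with class-conditional GAN samples until class counts match. The generator interface is a hypothetical placeholder, and the paper's exact augmentation schedule is not reproduced here.

```python
# Sketch of class balancing with synthetic data: top up each underrepresented
# class with GAN samples until all classes match the largest one. The GAN is
# assumed trained; 'generator.sample' is a hypothetical interface.
import numpy as np

def balance_with_synthetic(X, y, generator):
    """X: (N, H, W) 2D distance maps; y: (N,) class labels."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_aug, y_aug = [X], [y]
    for cls, n in zip(classes, counts):
        missing = target - n
        if missing > 0:
            # class-conditional synthetic samples for the minority class
            X_aug.append(generator.sample(cls, missing))
            y_aug.append(np.full(missing, cls))
    return np.concatenate(X_aug), np.concatenate(y_aug)
```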

    Laplace-Beltrami Refined Shape Regression Applied to Neck Reconstruction for Craniosynostosis Patients: Combining posterior shape models with a Laplace-Beltrami-based approach for shape reconstruction

    This contribution is part of a project concerning the creation of an artificial dataset comprising 3D head scans of craniosynostosis patients for a deep-learning-based classification. To conform to real data, both head and neck are required in the 3D scans. However, during patient recording the neck is often covered by medical staff, and simply pasting an arbitrary neck leaves large gaps in the 3D mesh. We therefore use a publicly available statistical shape model (SSM) for neck reconstruction. However, most SSMs of the head are constructed from healthy subjects, so a full head reconstruction loses the craniosynostosis-specific head shape. We propose a method to recover the neck while keeping the pathological head shape intact: a Laplace-Beltrami-based refinement step deforms the posterior mean shape of the full head model towards the pathological head. The artificial neck is created using the publicly available Liverpool-York-Model. We apply our method to construct artificial necks for head scans of 50 scaphocephaly patients. Our method reduces the mean vertex correspondence error by approximately 1.3 mm compared to the ordinary posterior mean shape, preserves the pathological head shape, and creates a continuous transition between neck and head. The presented method shows good results for reconstructing a plausible neck for craniosynostosis patients and, easily generalized, might also be applicable to other pathological shapes.
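
    A rough sketch of the refinement idea: preserve the reconstruction's local (differential) shape while softly pinning head vertices to the pathological scan. A uniform graph Laplacian stands in for the paper's Laplace-Beltrami operator, and all weights are illustrative.

```python
# Laplacian-based refinement sketch: keep the surface's differential coordinates
# (local shape) while softly constraining head vertices to the pathological scan.
# A uniform graph Laplacian approximates the paper's Laplace-Beltrami operator;
# the weight w is illustrative, not the paper's value.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def refine(V0, edges, anchors, anchor_pos, w=10.0):
    """V0: (n,3) posterior mean vertices; edges: list of (i,j) mesh edges;
    anchors: vertex indices softly pinned to anchor_pos (k,3)."""
    n = len(V0)
    I, J = zip(*edges)
    A = sp.coo_matrix((np.ones(len(edges)), (I, J)), shape=(n, n))
    A = (A + A.T).tocsr()                                  # symmetric adjacency
    L = (sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A).tocsr()
    delta = L @ V0                                         # differential coordinates
    k = len(anchors)
    C = sp.coo_matrix((w * np.ones(k), (np.arange(k), anchors)),
                      shape=(k, n)).tocsr()                # soft anchor constraints
    # least squares: min ||L V - delta||^2 + ||C V - w * anchor_pos||^2
    lhs = (L.T @ L + C.T @ C).tocsc()
    rhs = L.T @ delta + C.T @ (w * np.asarray(anchor_pos))
    solve = splu(lhs)
    return np.column_stack([solve.solve(rhs[:, j]) for j in range(3)])
```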